Add C++ runtime for spleeter about source separation #2242
Conversation
So efficient, much respect!
I extracted the two audio clips above with Voc_FT; you can compare the results. 1_main.mov 2_main.mov
How fast is this model you mentioned on CPU? Could you measure its RTF (real-time factor)?
My test machine is a Mac mini with an M4 Pro chip, calling onnxruntime from Java.
Your results are stunning. Where can I download the ONNX version of the MDX-Net model? I really need it. Also, can it handle recordings with crowd noise, e.g. from a shopping mall?
@dfengpo I'm not sure about denoising; I don't have suitable test material on hand.
Were the separation examples above produced with the following script?
I used this: https://github.com/acely/uvr-mdx-infer/blob/main/separate.py
Usage
Build sherpa-onnx and download model files
Run it with audio_example.wav
The output logs are
Note: This was tested on my macOS (x64) machine, running on CPU.
audio_example.wav
audio_example.mov
output_vocals.wav for audio_example.wav
output_vocals.mov
output_accompaniment.wav for audio_example.wav
output_accompaniment.mov
Run it with qi-fen-le-zh.wav
The output logs are
qi-fen-le-zh.wav
qi-feng-le-zh.mov
Vocals for qi-fen-le-zh.wav
output_vocals.mov
Accompaniment for qi-fen-le-zh.wav
output_accompaniment.mov
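The vocals/accompaniment outputs above come from the spleeter-style approach: the model predicts a soft mask over the mixture spectrogram, the vocal stem is the masked mixture, and the accompaniment is the remainder, so the two stems sum back to the mixture. The sketch below illustrates only that masking idea with hypothetical names; it is not the actual code added in this PR.

```cpp
#include <cstddef>
#include <vector>

// Two separated stems, same shape as the input spectrogram (flattened).
struct Stems {
  std::vector<float> vocals;
  std::vector<float> accompaniment;
};

// Apply a soft mask (values in [0, 1]) predicted by the model to the
// mixture magnitudes: vocals = mask * mixture,
// accompaniment = (1 - mask) * mixture. By construction the stems
// sum element-wise back to the mixture.
Stems ApplyMask(const std::vector<float> &mixture,
                const std::vector<float> &mask) {
  Stems out;
  out.vocals.resize(mixture.size());
  out.accompaniment.resize(mixture.size());
  for (std::size_t i = 0; i < mixture.size(); ++i) {
    out.vocals[i] = mask[i] * mixture[i];
    out.accompaniment[i] = (1.0f - mask[i]) * mixture[i];
  }
  return out;
}
```

In the real pipeline the mask is applied per time-frequency bin of an STFT and the stems are converted back to waveforms with an inverse STFT; the snippet only shows the element-wise masking step.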
Fixes #2235
CC @acely